Explicit estimators of parameters in the Growth Curve model with linearly structured covariance matrices

Authors

  • Martin Ohlson
  • Dietrich von Rosen
Abstract

Estimation of parameters in the classical Growth Curve model is considered for the case where the covariance matrix has some specific linear structure. In our examples the maximum likelihood estimators cannot be obtained explicitly and must be computed with optimization algorithms. Explicit estimators are therefore obtained as alternatives to the maximum likelihood estimators. Starting from a discussion of residuals, a simple non-iterative estimation procedure is suggested which gives explicit and consistent estimators of both the mean and the linearly structured covariance matrix.
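
To make the procedure concrete, the sketch below illustrates one possible two-step, non-iterative scheme of this kind for the Growth Curve model X = ABC + E with a linearly structured covariance matrix Sigma = sum_i theta_i * G_i. It is a minimal illustration under these assumptions; the function name, the basis matrices G_i and the least-squares projection step are illustrative choices, not the authors' exact estimators.

import numpy as np

def explicit_growth_curve_estimators(X, A, C, G):
    """Hypothetical two-step estimators for X = A B C + E, columns of E ~ N_p(0, Sigma).

    X : p x n data matrix, A : p x q within-individuals design matrix,
    C : k x n between-individuals design matrix, G : list of p x p symmetric
    basis matrices defining the linear structure Sigma = sum_i theta_i * G_i.
    """
    n = X.shape[1]

    # Step 1: residuals after projecting out the between-individuals design;
    # S is an unstructured, consistent estimator of Sigma.
    P_C = C.T @ np.linalg.solve(C @ C.T, C)   # projection onto the row space of C
    R = X @ (np.eye(n) - P_C)
    S = R @ R.T / (n - np.linalg.matrix_rank(C))

    # Step 2: explicit least-squares projection of S onto the linear structure,
    # i.e. minimize ||S - sum_i theta_i * G_i||_F^2 over theta (a small linear system).
    M = np.array([[np.trace(Gi @ Gj) for Gj in G] for Gi in G])
    v = np.array([np.trace(Gi @ S) for Gi in G])
    theta = np.linalg.solve(M, v)
    Sigma_hat = sum(t * Gi for t, Gi in zip(theta, G))

    # Step 3: plug Sigma_hat into the explicit (generalized least squares type)
    # estimator of the mean parameter B.
    W = np.linalg.inv(Sigma_hat)
    B_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ X @ C.T) @ np.linalg.inv(C @ C.T)
    return B_hat, Sigma_hat, theta

For a compound symmetry structure one could, for example, take G = [np.eye(p), np.ones((p, p)) - np.eye(p)]. Every step is explicit, so no iterative optimization is required, and consistency of the unstructured estimator S carries over to Sigma_hat and B_hat under standard conditions.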


Similar articles

Structure of Wavelet Covariance Matrices and Bayesian Wavelet Estimation of Autoregressive Moving Average Model with Long Memory Parameters

In exploring and characterizing statistical populations, the analysis of data obtained from them is essential. One appropriate method for such analysis is the structural study of the functions fitted to these data. The wavelet transform is one of the most powerful tools for analysing these functions, and the structure of the wavelet coefficients is very impor...


A Two-Step Regression Method with Connections to Partial Least Squares and the Growth Curve Model

Prediction of a continuous response variable from background data is considered. The predictor data may have a collinear structure and comprise group effects. A new two-step regression method inspired by PLS (partial least squares regression) is proposed. The method is coupled to a novel application of the Cayley-Hamilton theorem and a two-step estimation proc...


A Well-Conditioned and Sparse Estimation of Covariance and Inverse Covariance Matrices Using a Joint Penalty

We develop a method for estimating well-conditioned and sparse covariance and inverse covariance matrices from a sample of vectors drawn from a sub-Gaussian distribution in a high-dimensional setting. The proposed estimators are obtained by minimizing the quadratic loss function plus a joint penalty consisting of the ℓ1-norm and the variance of the eigenvalues. In contrast to some of the existing methods of covariance...
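
Read literally, the penalized objective described above plausibly takes the following form (a hedged reconstruction; the exact weights, norm and constraint set are assumptions):

\hat{\Sigma} = \arg\min_{\Sigma \succ 0} \; \|\Sigma - S\|_F^2 + \lambda \|\Sigma\|_1 + \gamma \sum_{i=1}^{p} \bigl(\sigma_i(\Sigma) - \bar{\sigma}(\Sigma)\bigr)^2,

where S is the sample covariance matrix, \lambda, \gamma \ge 0 are tuning parameters, and \sigma_i(\Sigma) denote the eigenvalues of \Sigma with mean \bar{\sigma}(\Sigma); the eigenvalue-variance term keeps the estimator well conditioned while the \ell_1 term induces sparsity.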


Linear Shrinkage Estimation of Large Covariance Matrices with Use of Factor Models

The problem of estimating large covariance matrices with use of factor models is addressed when both the sample size and the dimension of covariance matrix tend to infinity. In this paper, we consider a general class of weighted estimators which includes (i) linear combinations of the sample covariance matrix and the model-based estimator under the factor model and (ii) ridge-type estimators wi...
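
The weighted estimators mentioned above can be written in a generic form (a sketch under assumptions; the particular weighting and the criterion used to choose it are not specified here):

\hat{\Sigma}_\alpha = \alpha\, S_n + (1-\alpha)\, \hat{\Sigma}_{\mathrm{F}}, \qquad 0 \le \alpha \le 1,

where S_n is the sample covariance matrix and \hat{\Sigma}_{\mathrm{F}} is the covariance matrix implied by the fitted factor model; the ridge-type variants replace \hat{\Sigma}_{\mathrm{F}} by a multiple of the identity matrix.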




Journal:
  • J. Multivariate Analysis

Volume 101, Issue –

Pages –

Publication date: 2010